
Keyword Search Result

[Keyword] Support Vector Machines (25 hits)

21-25 of 25 hits

  • Training Augmented Models Using SVMs

    Mark J.F. GALES  Martin I. LAYTON  

     
    INVITED PAPER

Vol: E89-D No:3  Page(s): 892-899

There has been significant interest in developing new forms of acoustic model, in particular models which allow additional dependencies to be represented beyond those contained within a standard hidden Markov model (HMM). This paper discusses one such class of models, augmented statistical models. Here, a local exponential approximation is made about some point on a base model. This allows additional dependencies within the data to be modelled beyond those represented in the base distribution. Augmented models based on Gaussian mixture models (GMMs) and HMMs are briefly described. These augmented models are then related to generative kernels, one approach used to allow support vector machines (SVMs) to be applied to variable-length data. The training of augmented statistical models within an SVM generative-kernel framework is then discussed. This may be viewed as using maximum-margin training to estimate statistical models. Augmented Gaussian mixture models are then evaluated using rescoring on a large-vocabulary speech recognition task.
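
    The generative-kernel idea above maps a variable-length observation sequence to a fixed-length score-space vector derived from a base model, so that a standard SVM can be applied. A minimal sketch of this mapping (not code from the paper; the single-Gaussian base model and the particular feature choice here are illustrative assumptions):

    ```python
    import numpy as np

    def score_space_features(frames, mu, var):
        """Map a variable-length 1-D sequence to a fixed-length vector
        using a single-Gaussian base model (a simplified generative
        kernel): the mean per-frame log-likelihood plus the gradient of
        the log-likelihood with respect to the Gaussian mean."""
        frames = np.asarray(frames, dtype=float)
        ll = -0.5 * (np.log(2 * np.pi * var) + (frames - mu) ** 2 / var)
        grad_mu = (frames - mu) / var   # d log N(x; mu, var) / d mu
        return np.array([ll.mean(), grad_mu.mean()])

    # Sequences of different lengths map to vectors of the same dimension,
    # which is what lets an SVM operate on variable-length data.
    f1 = score_space_features([0.1, -0.2, 0.3], mu=0.0, var=1.0)
    f2 = score_space_features([1.5, 2.0, 1.0, 1.8, 2.2], mu=0.0, var=1.0)
    assert f1.shape == f2.shape == (2,)
    ```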

  • Geometrical Properties of Lifting-Up in the Nu Support Vector Machines

    Kazushi IKEDA  

     
    PAPER-Biocybernetics, Neurocomputing

Vol: E89-D No:2  Page(s): 847-852

Geometrical properties of the lifting-up technique in support vector machines (SVMs) are discussed. In many applications, an SVM finds the optimal inhomogeneous separating hyperplane in terms of margins, while some theoretical analyses of SVMs have treated only homogeneous hyperplanes for simplicity. Although the two appear equivalent owing to the so-called lifting-up technique, they in fact differ: the solution of the homogeneous SVM with lifting-up depends strongly on the lifting-up parameter. It is also shown that this solution approaches that of the inhomogeneous SVM asymptotically, as the parameter goes to infinity.
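
    The lifting-up technique itself is a simple feature augmentation: each vector x is mapped to (x, R), so an inhomogeneous hyperplane w·x + b = 0 can be represented as a homogeneous one in the lifted space. A minimal sketch (illustrative, not from the paper; note that the max-margin solution an SVM actually finds in the lifted space depends on R, which is the point of the analysis):

    ```python
    import numpy as np

    def lift_up(X, R):
        """Append the constant R to each feature vector: x -> (x, R).

        The inhomogeneous hyperplane w.x + b = 0 corresponds to the
        homogeneous hyperplane (w, b/R).z = 0 in the lifted space, so a
        solver restricted to hyperplanes through the origin can still
        represent a bias term."""
        n = X.shape[0]
        return np.hstack([X, np.full((n, 1), R)])

    # Toy 1-D data separable only with a bias term.
    X = np.array([[0.0], [1.0], [3.0], [4.0]])
    w, b = 1.0, -2.0                  # inhomogeneous separator: x - 2 = 0
    R = 5.0
    w_lifted = np.array([w, b / R])   # equivalent homogeneous weights

    pred_orig = np.sign(X @ np.array([w]) + b)
    pred_lift = np.sign(lift_up(X, R) @ w_lifted)
    assert np.array_equal(pred_orig, pred_lift)
    ```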

  • Dialogue Speech Recognition by Combining Hierarchical Topic Classification and Language Model Switching

    Ian R. LANE  Tatsuya KAWAHARA  Tomoko MATSUI  Satoshi NAKAMURA  

     
    PAPER-Spoken Language Systems

Vol: E88-D No:3  Page(s): 446-454

An efficient, scalable speech recognition architecture combining topic detection and topic-dependent language modeling is proposed for multi-domain spoken language systems. In the proposed approach, the topic is automatically detected from the user's utterance, and speech recognition is then performed with the corresponding topic-dependent language model. This approach enables users to switch freely between domains while maintaining high recognition accuracy. Because topic detection is performed on a single utterance, detection errors may occur and propagate through the system. To improve robustness, a hierarchical back-off mechanism is introduced: detailed topic models are applied when topic detection is confident, and wider models covering multiple topics are applied in cases of uncertainty. The performance of the proposed architecture is evaluated in combination with two topic detection methods: unigram likelihood and SVMs (Support Vector Machines). On the ATR Basic Travel Expression Corpus, both methods provide a significant reduction in WER (9.7% and 10.3%, respectively) compared with a single language model system. Furthermore, recognition accuracy is comparable to decoding with all topic-dependent models in parallel, at a much lower computational cost.
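
    The back-off selection described above can be sketched as a confidence-thresholded lookup. The topic names, hierarchy, and threshold below are purely hypothetical; a real system would use trained classifier confidence scores and actual topic-dependent language models:

    ```python
    # Hypothetical topic hierarchy: detailed topic -> wider topic group.
    TOPIC_TO_GROUP = {
        "hotel": "accommodation",
        "restaurant": "dining",
        "airport": "transport",
    }

    def select_language_model(topic_scores, threshold=0.7):
        """Pick an LM name from per-topic confidence scores in [0, 1].

        Confident detection -> detailed topic LM; otherwise back off to
        a wider group LM, then to a domain-independent fallback."""
        topic, confidence = max(topic_scores.items(), key=lambda kv: kv[1])
        if confidence >= threshold:
            return f"lm_{topic}"                   # detailed topic model
        if topic in TOPIC_TO_GROUP:
            return f"lm_{TOPIC_TO_GROUP[topic]}"   # wider group model
        return "lm_general"                        # fallback model

    assert select_language_model({"hotel": 0.9, "airport": 0.1}) == "lm_hotel"
    assert select_language_model({"hotel": 0.4, "airport": 0.35}) == "lm_accommodation"
    ```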

  • Sequential Fusion of Output Coding Methods and Its Application to Face Recognition

    Jaepil KO  Hyeran BYUN  

     
    PAPER-Face

Vol: E87-D No:1  Page(s): 121-128

In face recognition, simple classifiers are frequently used. For a robust system, it is common to construct a multi-class classifier by combining the outputs of several binary classifiers; this is called the output coding method. The two basic output coding methods for this purpose are OnePerClass (OPC) and PairWise Coupling (PWC). The performance of output coding methods depends on the accuracy of the base dichotomizers, and the Support Vector Machine (SVM) is well suited to this role. In this paper, we review output coding methods and introduce a new sequential fusion method that uses SVMs as base classifiers and combines OPC and PWC according to their properties. In the experiments, we compare the proposed method with others. The results show that it significantly improves performance on a real dataset.
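
    For reference, OPC and PWC differ only in how a multi-class decision is decoded from binary outputs: OPC picks the class whose one-vs-rest classifier scores highest, while PWC takes a majority vote over pairwise classifiers. A minimal sketch with illustrative toy scores (this is the standard decoding, not the paper's fusion method):

    ```python
    import numpy as np

    def decode_opc(scores):
        """OnePerClass: one binary classifier per class; pick the max score."""
        return int(np.argmax(scores))

    def decode_pwc(pairwise, n_classes):
        """PairWise Coupling: one classifier per class pair; majority vote.

        `pairwise` maps (i, j) with i < j to a margin; positive favours i."""
        votes = np.zeros(n_classes)
        for (i, j), margin in pairwise.items():
            votes[i if margin > 0 else j] += 1
        return int(np.argmax(votes))

    # Three classes, toy classifier outputs.
    assert decode_opc([-0.2, 0.8, 0.1]) == 1
    assert decode_pwc({(0, 1): -0.5, (0, 2): 0.3, (1, 2): 0.7}, 3) == 1
    ```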

  • SVM-Based Multi-Document Summarization Integrating Sentence Extraction with Bunsetsu Elimination

    Tsutomu HIRAO  Kazuhiro TAKEUCHI  Hideki ISOZAKI  Yutaka SASAKI  Eisaku MAEDA  

     
    PAPER

Vol: E86-D No:9  Page(s): 1702-1709

In this paper, we propose a machine-learning-based method of multi-document summarization that integrates sentence extraction with bunsetsu elimination. We employ Support Vector Machines for both modules. To evaluate the effect of bunsetsu elimination, we participated in the multi-document summarization task at TSC-2 with two approaches: (1) sentence extraction only, and (2) sentence extraction + bunsetsu elimination. The results of the subjective evaluation at TSC-2 show that both approaches are superior to the Lead-based method in terms of information coverage. In addition, we made extracts from the given abstracts to quantitatively examine the effectiveness of bunsetsu elimination. The experimental results showed that bunsetsu elimination makes summaries more informative. Moreover, we found that extraction based on SVMs trained on short extracts outperforms the Lead-based method, whereas SVMs trained on long extracts do not.
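
    The two-stage pipeline above, extract sentences first, then eliminate low-importance bunsetsu (phrase units) within them, can be sketched as follows. The sentences, phrases, and scores here are toy values standing in for SVM decision values:

    ```python
    def summarize(sentences, n_sentences=2, phrase_threshold=0.0):
        """Two-stage summarization sketch.

        `sentences` is a list of (sentence_score, [(phrase, phrase_score), ...]).
        Stage 1 keeps the n_sentences highest-scoring sentences;
        stage 2 drops phrases scoring below phrase_threshold."""
        top = sorted(sentences, key=lambda s: s[0], reverse=True)[:n_sentences]
        summary = []
        for _, phrases in top:
            kept = [p for p, score in phrases if score >= phrase_threshold]
            summary.append(" ".join(kept))
        return summary

    docs = [
        (0.9, [("markets", 0.8), ("reportedly", -0.5), ("fell", 0.6)]),
        (0.2, [("weather", 0.1), ("was", 0.0), ("mild", 0.2)]),
        (0.7, [("banks", 0.9), ("however", -0.3), ("recovered", 0.5)]),
    ]
    assert summarize(docs) == ["markets fell", "banks recovered"]
    ```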
